Challenges: Designing intuitive human-AI collaborative interfaces, balancing AI automation with user control, handling the uncertainty of AI outputs while effectively assisting users in image generation, and reducing the number of iterations required, or using them more effectively, to create innovative products.
Goals: Creating creative-assistance interfaces that can understand user intentions or give users effective control over generated content.
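As a rough illustration of the intent-understanding goal, the sketch below maps a free-form request to structured, user-editable generation parameters. It assumes the OpenAI Python SDK with an API key in the environment; the model name and the parameter schema are illustrative placeholders, not taken from any specific project.

```python
# A minimal sketch of intent parsing for controllable image generation.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "Extract image-generation parameters from the user's request. "
    'Reply with JSON only: {"subject": str, "style": str, "palette": str}.'
)

def parse_intent(request: str) -> dict:
    """Map a free-form request to structured, user-editable controls."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any JSON-capable chat model works
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
    )
    return json.loads(response.choices[0].message.content)

# The parsed fields can be surfaced as UI controls (dropdowns, sliders),
# so users adjust parameters directly instead of re-prompting blindly.
print(parse_intent("a cozy watercolor cabin at dusk"))
```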
Challenges: Bridging the expressiveness gap between traditional human movements and robotic actions, addressing the dimensional differences between motion-capture and robotic systems, managing the complexity of action segmentation and semantic understanding, and translating nuanced human emotional expressions into executable robotic instructions.
Goals: Conveying the charm of traditional arts through robotic interaction while demonstrating traditional artistic expressiveness in robots, making them more approachable and affable.
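One concrete piece of the dimensionality challenge above is retargeting: projecting a high-DOF motion-capture pose into a robot's lower-DOF joint space. The NumPy sketch below is a minimal, hypothetical version; the DOF counts, joint indices, and limits are placeholders, not values from any real system.

```python
# A minimal sketch of dimensional retargeting from motion capture to a robot.
import numpy as np

MOCAP_DOF, ROBOT_DOF = 51, 6  # e.g., full-body capture -> a 6-DOF arm

# Hypothetical linear map that selects the arm-related mocap joints.
RETARGET = np.zeros((ROBOT_DOF, MOCAP_DOF))
RETARGET[np.arange(ROBOT_DOF), [3, 4, 5, 12, 13, 14]] = 1.0

# Placeholder joint limits for the robot (radians).
LIMITS = np.deg2rad(np.array([[-170.0, 170.0]] * ROBOT_DOF))

def retarget(mocap_frame: np.ndarray) -> np.ndarray:
    """Project one mocap pose into the robot's joint space, clamped to limits."""
    q = RETARGET @ mocap_frame
    return np.clip(q, LIMITS[:, 0], LIMITS[:, 1])

frame = np.random.uniform(-1.0, 1.0, MOCAP_DOF)  # one captured pose (radians)
print(retarget(frame))
```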
Challenges: Reducing cognitive load in VR environments, enhancing immersion, and optimizing the efficiency and accuracy of VLA (vision-language-action) training.
Goals: Creating more natural and efficient interaction methods for virtual environments while seeking more effective approaches for training hardware within virtual spaces.
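For the VLA-training goal, one common approach is behavior cloning on demonstrations collected in VR. The PyTorch sketch below trains a toy policy to imitate logged (observation, action) pairs; the dimensions, architecture, and data are stand-ins, not a real pipeline.

```python
# A minimal behavior-cloning sketch for VLA-style training on VR demonstrations.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 7  # e.g., fused vision-language features -> arm command

policy = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, ACT_DIM))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for (observation, action) pairs logged from VR teleoperation.
observations = torch.randn(256, OBS_DIM)
actions = torch.randn(256, ACT_DIM)

for step in range(100):
    loss = nn.functional.mse_loss(policy(observations), actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
```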
Python, HTML, JavaScript, CSS, React.js, Unity, C#, Node.js
TensorFlow, PyTorch, OpenAI API, Hugging Face, LLM integration
Figma, Adobe Creative Suite, 3D modeling tools, design thinking methodologies
User studies, A/B testing, ethnographic research, usability testing, user experience evaluation
VR headsets, motion capture systems, robotics platforms, sensors, Arduino
I am particularly interested in applying LLM+Agent approaches to UI and HCI, as well as in VLM and VLA training for robotic motion control and human-robot interaction.
I integrate these cutting-edge technologies into my research through experimental studies and prototype development, exploring how they can enhance human-machine collaboration and create more intuitive interaction experiences.
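As a flavor of the kind of prototype this involves, here is a minimal LLM+Agent perceive-decide-act loop for UI control. The tool names and dispatcher are hypothetical, and the LLM call is stubbed so the sketch runs without an API key.

```python
# A minimal sketch of an LLM+Agent loop for UI control.
def llm_decide(ui_state: str) -> dict:
    """Stub for the LLM call: a real prototype would send ui_state to an
    LLM (e.g., via the OpenAI API) and parse its JSON reply into an action."""
    return {"tool": "click", "target": "generate_button"}

# Hypothetical tool registry mapping action names to UI handlers.
TOOLS = {
    "click": lambda target: f"clicked {target}",
    "type_text": lambda target, text="": f"typed '{text}' into {target}",
}

def agent_step(ui_state: str) -> str:
    """One perceive-decide-act cycle: observe UI state, pick a tool, act."""
    action = llm_decide(ui_state)
    handler = TOOLS[action.pop("tool")]
    return handler(**action)

print(agent_step("image editor open, empty canvas"))
```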